Overview of CLEF QA Entrance Exams Task 2015

Authors

  • Álvaro Rodrigo
  • Anselmo Peñas
  • Yusuke Miyao
  • Eduard H. Hovy
  • Noriko Kando
Abstract

This paper describes the Entrance Exams task at the CLEF QA Track 2015. Following the last two editions, the data set has been extracted from actual university entrance examinations and includes a variety of topics and question types. Systems receive a set of Multiple-Choice Reading Comprehension tests, and the task is to select the correct answer among a finite set of candidates according to the given text. The questions were originally designed for testing human examinees rather than for evaluating computer systems; the data set therefore challenges readers to demonstrate their understanding of texts. As a result, questions and answers are lexically distant from their supporting excerpts in the text, requiring not only a high degree of textual inference but also the development of strategies for selecting the correct answer.
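
The abstract above describes only the task format, not any solution method. Purely to make the input/output format concrete, the following is a minimal sketch of a naive lexical-overlap baseline in Python; all function and variable names are hypothetical and do not correspond to any participant system.

```python
# Minimal, hypothetical sketch of the Entrance Exams task format:
# given a reading passage, a question, and a finite set of candidate answers,
# pick the candidate with the largest word overlap with the passage and question.
# This is an illustration only, not the method of any system described here.

def tokenize(text: str) -> set[str]:
    """Lowercase and split on whitespace; real systems would use proper tokenization."""
    return set(text.lower().split())

def answer_multiple_choice(passage: str, question: str, candidates: list[str]) -> int:
    """Return the index of the candidate whose tokens overlap most with passage + question."""
    context = tokenize(passage) | tokenize(question)
    scores = [len(context & tokenize(candidate)) for candidate in candidates]
    return max(range(len(candidates)), key=lambda i: scores[i])

# Toy test item (invented for illustration).
passage = "Maria studied biology at university and later became a marine researcher."
question = "What did Maria study at university?"
candidates = ["history", "biology", "economics", "law"]
print(answer_multiple_choice(passage, question, candidates))  # -> 1
```

As the abstract stresses, questions and answers in this data set are lexically distant from their supporting excerpts, so a surface-overlap heuristic like this one is expected to fail on most items; that difficulty is precisely what the task is designed to measure.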


Similar Articles

Overview of CLEF QA Entrance Exams Task 2014

This paper describes the Entrance Exams task at the CLEF QA Track 2014. Following the 2013 edition, the data set has been extracted from actual university entrance examinations and includes a variety of topics and question types. Systems receive a set of Multiple-Choice Reading Comprehension tests, and the task is to select the correct answer among a finite set of candidates according to the given t...


Overview of QA4MRE 2013 Entrance Exams Task

This paper describes the Question Answering for Machine Reading (QA4MRE) Entrance Exams at the 2013 Cross Language Evaluation Forum. The data set of this task is extracted from actual university entrance examinations as-is, and therefore includes a variety of topics in daily life. Another unique feature of the Entrance Exams task is that questions are designed originally for testing human exami...


Overview of the NTCIR-11 QA-Lab Task

This paper describes an overview of the first QA Lab (Question Answering Lab for Entrance Exam) task at NTCIR 11. The goal of the QA lab is to provide a module-based platform for advanced question answering systems and comparative evaluation for solving real-world university entrance exam questions. In this task, “world history” questions are selected from The National Center Test for Universit...


CoMiC: Exploring Text Segmentation and Similarity in the English Entrance Exams Task

This paper describes our contribution to the English Entrance Exams task of CLEF 2015, which requires participating systems to automatically solve multiple-choice reading comprehension tasks. We use a combination of text segmentation and different similarity measures with the aim of exploiting two observed aspects of tests: 1) the often linear relationship between reading text and test question... (A rough, hypothetical sketch of this segmentation-plus-similarity idea appears after this list.)


QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation

This paper describes the methodology for testing the performance of Machine Reading systems through Question Answering and Reading Comprehension Tests. This was the aim of the QA4MRE challenge, which was run as a Lab at CLEF 2011–2013. The traditional QA task was replaced by a new Machine Reading task, whose intention was to ask questions that required a deep knowledge of individual short te...

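
The CoMiC entry above mentions combining text segmentation with similarity measures. As a rough, hypothetical illustration of that general idea only (not the CoMiC system's actual pipeline), one could slide a window of sentences over the reading text and pick the candidate answer whose pairing with the question is most similar, under a bag-of-words cosine measure, to some segment:

```python
# Hypothetical sketch of segmentation + similarity for multiple-choice reading
# comprehension. Names and parameters are illustrative, not taken from CoMiC.
from collections import Counter
import math

def segments(sentences: list[str], size: int = 3) -> list[str]:
    """Sliding windows of `size` consecutive sentences over the reading text."""
    return [" ".join(sentences[i:i + size])
            for i in range(max(1, len(sentences) - size + 1))]

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def choose_answer(sentences: list[str], question: str, candidates: list[str]) -> int:
    """Return the index of the candidate whose pairing with the question best matches a segment."""
    segs = segments(sentences)
    def best_score(candidate: str) -> float:
        return max(cosine(question + " " + candidate, seg) for seg in segs)
    return max(range(len(candidates)), key=lambda i: best_score(candidates[i]))
```

Exploiting the roughly linear order of questions relative to the text, which the CoMiC abstract highlights, would additionally restrict which segments each question is matched against; that refinement is omitted from this sketch.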


Journal:

Volume:   Issue:

Pages: -

Publication date: 2015